10 research outputs found

    GTH-UPM system for search on speech evaluation

    This paper describes the GTH-UPM system for the Albayzin 2014 Search on Speech Evaluation. The evaluation task consists of searching for a list of terms/queries in audio files. The GTH-UPM system we present is based on an LVCSR (Large Vocabulary Continuous Speech Recognition) system. We have used the MAVIR corpus and the Spanish partition of the EPPS (European Parliament Plenary Sessions) database for training both acoustic and language models. The main effort has been focused on lexicon preparation and text selection for the language model construction. The system makes use of different lexicons and language models depending on the task being performed. For the best configuration of the system on the development set, we obtained a FOM of 75.27 for the keyword spotting task.
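An LVCSR-based keyword spotter of this kind searches the recognizer's word-level output for the query terms. A minimal sketch over hypothetical decoder output (the words, timestamps, and confidence values are illustrative, not from the GTH-UPM system):

```python
# Minimal sketch of term search over LVCSR output (hypothetical data).
# Each decoded word carries start/end times and a confidence score.
decoded = [
    ("buenos", 0.10, 0.45, 0.92),
    ("dias", 0.45, 0.80, 0.88),
    ("parlamento", 1.20, 1.95, 0.75),
    ("europeo", 1.95, 2.50, 0.81),
]

def search_terms(decoded, queries):
    """Return detections (term, start, end, score) for single-word queries."""
    hits = []
    for word, start, end, conf in decoded:
        if word in queries:
            hits.append((word, start, end, conf))
    return hits

for term, start, end, score in search_terms(decoded, {"parlamento", "europeo"}):
    print(f"{term}: {start:.2f}-{end:.2f}s (score {score:.2f})")
```

Real systems additionally handle multi-word and out-of-vocabulary queries, which is where lexicon preparation matters most.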

    Spoken term detection ALBAYZIN 2014 evaluation: overview, systems, results, and discussion

    The electronic version of this article is the complete one and can be found online at: http://dx.doi.org/10.1186/s13636-015-0063-8
    Spoken term detection (STD) aims at retrieving data from a speech repository given a textual representation of the search term. Nowadays, it is receiving much interest due to the large volume of multimedia information. STD differs from automatic speech recognition (ASR) in that ASR is interested in all the terms/words that appear in the speech data, whereas STD focuses on a selected list of search terms that must be detected within the speech data. This paper presents the systems submitted to the STD ALBAYZIN 2014 evaluation, held as a part of the ALBAYZIN 2014 evaluation campaign within the context of the IberSPEECH 2014 conference. This is the first STD evaluation that deals with the Spanish language. The evaluation consists of retrieving the speech files that contain the search terms, indicating their start and end times within the appropriate speech file, along with a score value that reflects the confidence given to the detection of the search term. The evaluation is conducted on a Spanish spontaneous speech database, which comprises a set of talks from workshops and amounts to about 7 h of speech. We present the database, the evaluation metrics, the systems submitted to the evaluation, the results, and a detailed discussion. Four different research groups took part in the evaluation. Evaluation results show reasonable performance for a moderate out-of-vocabulary term rate. This paper compares the systems submitted to the evaluation and makes a deep analysis based on some search term properties (term length, in-vocabulary/out-of-vocabulary terms, single-word/multi-word terms, and in-language/foreign terms).
    This work has been partly supported by project CMC-V2 (TEC2012-37585-C02-01) from the Spanish Ministry of Economy and Competitiveness. This research was also funded by the European Regional Development Fund and the Galician Regional Government (GRC2014/024, “Consolidation of Research Units: AtlantTIC Project” CN2012/160).
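The evaluation output described above (speech file, start and end times, confidence score) is typically scored with a term-weighted metric; NIST's Actual Term-Weighted Value (ATWV) is the usual choice in STD evaluations. A minimal sketch of an ATWV-style computation with hypothetical per-term counts (the exact metric definition, parameters, and data used in this evaluation are given in the paper itself):

```python
# Minimal sketch of a NIST-style Actual Term-Weighted Value (hypothetical counts).
BETA = 999.9               # weight on false alarms (NIST STD default)
SPEECH_SECONDS = 7 * 3600  # ~7 h of speech, as in the evaluation corpus

# term -> (true occurrences, correct detections, false alarms); illustrative only
counts = {
    "parlamento": (10, 9, 2),
    "europa":     (5, 3, 0),
}

def atwv(counts, speech_seconds, beta=BETA):
    """ATWV = 1 - mean over terms of (P_miss + beta * P_false_alarm)."""
    total = 0.0
    for n_true, n_correct, n_fa in counts.values():
        p_miss = 1.0 - n_correct / n_true
        # non-target trials approximated as one per second of speech
        p_fa = n_fa / (speech_seconds - n_true)
        total += p_miss + beta * p_fa
    return 1.0 - total / len(counts)

print(f"ATWV = {atwv(counts, SPEECH_SECONDS):.3f}")
```

A perfect system (no misses, no false alarms) scores 1.0; misses and false alarms both pull the value down, with false alarms weighted heavily.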

    Contributions to Speech and Language processing towards Automatic Speech Recognizers with Evolving Dictionaries

    Automatic speech recognizers nowadays offer notable performance in multiple and challenging tasks. This has fostered their incorporation into user applications that are used very frequently. Their presence in everyday human life has been reinforced by the fact that speech is one of the most natural forms of human communication. Thus, our interaction with machines is evolving towards speech-based communication, both for simple tasks like dictation or web searches and for more complex tasks like human-machine dialogs for configuring, e.g., domestic appliances. The acoustic and language conditions of the audio that serves as input for these systems can vary enormously from source to source. Consequently, a great deal of work on improving their acoustic and language models has been carried out over the last decades, with a recent impulse thanks to the resurgence of neural networks. On the acoustic side, the effort focuses on dealing with any kind of noise that may be present in the audio, on recognizing the speech of any kind of speaker (including different accents and any other varying feature of the human voice), and on addressing additional conditions of the speech process, like far-field recognition. On the language side, the effort focuses on how to manage the large number of different topics and domains that can be present in the speech. Beyond a brute-force approach, where a huge vocabulary and language model are used to try to cover any of these scenarios, interest is growing in systems that can restrict their modeling capabilities to recognize certain topics or domains optimally. This restriction allows the language models to be centered on the uses of language of interest, which would otherwise result in an inoperably big model (and likely a less accurate one).
    Furthermore, a variable language restriction that keeps tuning the model to the language characteristics of the current speech would offer the recognition system an adaptation capability able to reach optimal performance regardless of the topic of the input speech. This kind of adaptation is the main focus of this thesis. More specifically, we are interested in an automatic and unsupervised adaptation to the current speech that does not require an explicit identification of the current topic or domain to drive the adaptation process. Instead, we want the adaptation to be driven by the vocabulary employed in the speech, so as to tune the dictionaries of the recognition systems accordingly (with effects on both their vocabularies and language models). Apart from the in-vocabulary (IV) words that we may be able to decode from the speech, we are more interested in the words that are not present in the current vocabulary of the systems, or out-of-vocabulary (OOV) words, as they can implicitly indicate changes of topic or domain. Thus, we propose strategies for detecting OOV terms in the speech, finding the best candidate words for them, and ultimately learning their syntax and semantics so they can be incorporated properly into the recognition systems, causing modifications that result in an adaptation when various OOV appearances and resolutions are considered together. The strategies proposed in this thesis can be divided into two levels of operation: one that works at a local, static level and one that works at a dynamic, evolving level. They are called the static Term Discovery strategy and the dynamic Vocabulary strategy, respectively.
    The processes involved in the static Term Discovery strategy are the following:
    - Detection of OOV terms that might appear in the input speech. We have contributed a pair of OOV detection methods that can work in conjunction. The first method employs an OOV word model defined in both acoustic and language terms, while the second is based on confidence analyses over the output word lattice delivered by the recognition systems.
    - Search for candidate words for every OOV detection. We have contributed a search scheme that performs two different kinds of search, one acoustically driven and one semantically driven. For this scheme, we take advantage of external knowledge sources in which to find the best candidates. Furthermore, we also propose a distributed representation scheme for the resources found in graph-organized corpora. This type of representation can benefit the search and could also be employed in other semantic tasks, like those found in natural language processing and understanding.
    - Correction of the output transcription with the best candidate found, if any. We have contributed a series of candidate scores that can improve the decision of whether a candidate is suitable enough to substitute the original content of a detected OOV region.
    As regards the dynamic Vocabulary strategy, we propose a series of processes to be executed iteratively when needed, as a reaction to the speech being decoded:
    - Continuous collection of the terms retrieved by the static Term Discovery strategy, so as to assess whether some terms become interesting enough to be added to the system's vocabulary. We have contributed a scoring scheme that takes into account the scores the static Term Discovery strategy gave to a word, as well as the time elapsed between the moments when that word was retrieved, so as to give more importance to the new terms used more recently.
    - Selection of the most interesting new terms from the previous collection to add to the system's vocabulary, and selection of the least interesting IV terms to remove from it. We have contributed schemes for both selecting terms to add and terms to delete. For the words to add, we verify whether there is enough training material about the new terms in external sources, and we can also reconsider whether the transcription corrections made by a term were reliable enough, so as to refine our decision. For the words to remove, we consider both which IV words are not being employed enough in the input speech and which words do not fit sufficiently the current state of the system's language model.
    - Update of the vocabulary and language model of the recognition systems considering the previous word selections. We have contributed an update scheme that considers both the previous language model of the system and a language model built with texts from external knowledge sources that contain the new terms, proposing as well an interpolation scheme for both models so as to produce a new language model with the designated vocabulary.
    The proposed strategies have been evaluated in realistic experimental frameworks, employing state-of-the-art automatic speech recognizers designed with different vocabulary sizes, and large external knowledge sources. The speech test corpora contain a great variety of speakers and natural, spontaneous speech in which multiple topics are discussed. Such features are in consonance with the conditions under which we want our strategies to offer benefits. The evaluation results allow us to measure how noticeable the improvements are for recognition systems equipped with our strategies, in comparison with systems that lack them. In fact, we were able to achieve significant improvements over the baseline systems for both strategies and in most of the experimental configurations. Lastly, we were also able to study the behavior of the dynamic systems over time, in order to assess how fast and in which manner the desired adaptation happens.
    We observed that, on average, the dynamic systems were capable of offering significant improvements over the baseline systems (and even over the systems equipped with the first, static strategy) after just a few hours of operation and for very different speech corpora. Also, the type and number of words that the dynamic systems added or deleted over time were consistent with the expected behavior (mainly, adding words that actually appeared in the input speech and removing those that did not fit well enough in the current state of the evolving language model), which accounts for the notable benefits that were measured.
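The vocabulary-and-model update described above merges the system's previous language model with one built from external texts containing the new terms. A minimal sketch of linear interpolation over unigram probabilities (the models, the weight, and the merge over the joint vocabulary are illustrative assumptions; the thesis defines its own interpolation scheme):

```python
# Minimal sketch: linear interpolation of two unigram models (hypothetical values).
old_lm = {"casa": 0.6, "perro": 0.4}                 # previous system LM
ext_lm = {"casa": 0.2, "perro": 0.1, "dron": 0.7}    # LM from external texts

def interpolate(old_lm, ext_lm, lam=0.7):
    """p(w) = lam * p_old(w) + (1 - lam) * p_ext(w), over the merged vocabulary."""
    vocab = set(old_lm) | set(ext_lm)
    return {w: lam * old_lm.get(w, 0.0) + (1 - lam) * ext_lm.get(w, 0.0)
            for w in vocab}

new_lm = interpolate(old_lm, ext_lm)
# the OOV word "dron" now receives probability mass in the updated model
```

Because both inputs are proper distributions, the interpolated model is one as well, and every newly designated vocabulary word gets a nonzero probability.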

    Attention-based word vector prediction with LSTMs and its application to the OOV problem in ASR

    We propose three architectures for a word vector prediction system (WVPS) built with LSTMs that consider both the past and future contexts of a word for predicting a vector in an embedded space where its surrounding area is semantically related to the considered word. We introduce an attention mechanism in one of the architectures so that the system is able to assess the specific contribution of each context word to the prediction. All the architectures are trained under the same conditions and with the same training material, following a curriculum-learning fashion in the presentation of the data. For the inputs, we employ pretrained word embeddings. We evaluate the systems after the same number of training steps, over two different corpora composed of ground-truth speech transcriptions in Spanish from TCSTAR and from TV recordings used in the Search on Speech Challenge of IberSPEECH 2018. The results show significant differences between the architectures, consistent across both corpora. The attention-based architecture achieves the best results, suggesting its adequacy for the task. Also, we illustrate the usefulness of the systems for resolving out-of-vocabulary (OOV) regions marked by an ASR system capable of detecting OOV occurrences.
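A predicted word vector of this kind can resolve an OOV region by nearest-neighbor search over the embedding table. A minimal sketch with toy two-dimensional vectors and cosine similarity (the real system predicts vectors with LSTMs over pretrained embeddings; all names and values here are illustrative):

```python
import math

# Toy embedding table and a predicted context vector (all values illustrative).
embeddings = {
    "madrid": [0.9, 0.1],
    "ciudad": [0.7, 0.3],
    "correr": [0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def resolve_oov(predicted, embeddings):
    """Pick the vocabulary word whose embedding is closest to the predicted vector."""
    return max(embeddings, key=lambda w: cosine(predicted, embeddings[w]))

predicted = [0.85, 0.15]  # vector predicted from the OOV word's context
print(resolve_oov(predicted, embeddings))  # nearest word in embedding space
```

In practice the candidate list would be ranked by similarity and combined with acoustic evidence before correcting the transcription.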

    Estimating gravity component from accelerometers


    ALBAYZIN 2016 spoken term detection evaluation: an international open competitive evaluation in Spanish

    Within search-on-speech, Spoken Term Detection (STD) aims to retrieve data from a speech repository given a textual representation of a search term. This paper presents an international open evaluation for search-on-speech based on STD in Spanish and an analysis of the results. The evaluation has been designed carefully so that several analyses of the main results can be carried out. The evaluation consists in retrieving the speech files that contain the search terms, providing their start and end times, and a score value that reflects the confidence given to the detection. Two different Spanish speech databases have been employed in the evaluation: the MAVIR database, which comprises a set of talks from workshops, and the EPIC database, which comprises a set of European Parliament sessions in Spanish. We present the evaluation itself, both databases, the evaluation metric, the systems submitted to the evaluation, the results, and a detailed discussion. Five different research groups took part in the evaluation, and ten different systems were submitted in total. We compare the systems submitted to the evaluation and make a deep analysis based on some search term properties (term length, within-vocabulary/out-of-vocabulary terms, single-word/multi-word terms, and native (Spanish)/foreign terms).
    Xunta de Galicia | Ref. ED431G/01; Ministerio de Economía y Competitividad | Ref. TEC2015-67163-C2-1-R; Ministerio de Economía y Competitividad | Ref. TIN2014-54288-C4-1-R; Ministerio de Economía y Competitividad | Ref. TEC2015-68172-C2-1-
